Learning Decision Trees with Stochastic Linear Classifiers
Author(s): not recorded
Abstract
We consider learning decision trees in the boosting framework, where we assume that the classifier at each internal node comes from a hypothesis class H_I that satisfies the weak learning assumption. In this work we take H_I to be the class of stochastic linear classifiers and derive efficient algorithms for minimizing the Gini index over this class, even though the problem is non-convex. This yields an efficient decision tree learning algorithm, which also has a theoretical guarantee under the weak stochastic hypothesis assumption.
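The objective the abstract refers to, the expected Gini index of the split induced by a stochastic linear classifier, can be sketched as follows. This is an illustrative reading of that objective, not the paper's algorithm: the sigmoid routing model and all function names here are assumptions.

```python
import numpy as np

def sigmoid(z):
    """Logistic function, clipped for numerical stability."""
    z = np.clip(z, -60.0, 60.0)
    return 1.0 / (1.0 + np.exp(-z))

def gini(p):
    """Gini impurity of a binary label distribution: 2 p (1 - p)."""
    return 2.0 * p * (1.0 - p)

def expected_gini(w, X, y):
    """Expected Gini index of the split induced by a stochastic linear
    classifier: example x is routed to the right child with probability
    sigmoid(w . x), and each (soft) child's impurity is weighted by the
    probability mass reaching it."""
    q = sigmoid(X @ w)                         # P(route right) per example
    mass_r, mass_l = q.mean(), 1.0 - q.mean()  # mass reaching each child
    p_r = (q * y).sum() / max(q.sum(), 1e-12)  # positive-label rate, right child
    p_l = ((1.0 - q) * y).sum() / max((1.0 - q).sum(), 1e-12)  # left child
    return mass_l * gini(p_l) + mass_r * gini(p_r)
```

With w = 0 every example is routed with probability 1/2 and the objective equals the parent node's impurity, while a w aligned with a separating hyperplane drives it toward zero; the non-convexity of this objective in w is what makes the paper's efficient minimization result non-trivial.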
Similar resources
A comparison of stacking with MDTs to bagging, boosting, and other stacking methods
In this paper, we present an integration of the algorithm MLC4.5 for learning meta decision trees (MDTs) into the Weka data mining suite. MDTs are a method for combining multiple classifiers. Instead of giving a prediction, MDT leaves specify which classifier should be used to obtain a prediction. The algorithm is based on the C4.5 algorithm for learning ordinary decision trees. An extensive pe...
Enhancing Multi-Class Classification of Random Forest using Random Vector Functional Neural Network and Oblique Decision Surfaces
Both neural networks and decision trees are popular machine learning methods and are widely used to solve problems from diverse domains. These two classifiers are commonly used base classifiers in an ensemble framework. In this paper, we first present a new variant of oblique decision tree based on a linear classifier, then construct an ensemble classifier based on the fusion of a fast neural n...
Application of ensemble learning techniques to model the atmospheric concentration of SO2
In view of pollution prediction modeling, the study adopts homogenous (random forest, bagging, and additive regression) and heterogeneous (voting) ensemble classifiers to predict the atmospheric concentration of Sulphur dioxide. For model validation, results were compared against widely known single base classifiers such as support vector machine, multilayer perceptron, linear regression and re...
A comparison of stacking with meta decision trees to other combining methods
Meta decision trees (MDTs) are a method for combining multiple classifiers. We present an integration of the algorithm MLC4.5 for learning MDTs into the Weka data mining suite. We compare classifier ensembles combined with MDTs to bagged and boosted decision trees, and to classifier ensembles combined with other methods: voting, grading, multi-scheme and stacking with multi-response linear regr...
Incorporating canonical discriminant attributes in classification learning
This paper describes a method for incorporating canonical discriminant attributes in classification machine learning. Though decision trees and rules have semantic appeal when building expert systems, the merits of discriminant analysis are well documented. For data sets on which discriminant analysis obtains significantly better predictive accuracy than symbolic machine learning, the incorpora...